WorkServer White Paper

The GNP WorkServer

A NEBS Certified Modular Networking Platform for High Availability Applications



Overview  


The GNP WorkServer is a NEBS-certified communications server for high-availability applications. Components include Sun SPARC 5, SPARC 20, or the new UltraSPARC motherboards with integrated -48 VDC input power supplies; half- and full-height media options in removable carriers, including DAT, disk, and CD-ROM drives; a variety of I/O solutions; and integrated RAID disk controllers. The WorkServer is a fully functional computing server based on industry-standard Sun SPARC processing power.

The GNP WorkServer can be configured with either the SunOS or Solaris 2.x operating system and a variety of hardware and software options. These options include X.25, ISDN, T1, synchronous and asynchronous serial, FDDI, Fast Ethernet, Ethernet, ATM, and SCSI devices. An Intelligent Maintenance Network connects all components in an independent network for remote access and control. All of these are integrated with an intelligent Midplane, which provides segmented networking, device interconnects, power, and alarming.


Design Goals


NEBS Certified


NEBS is the Bellcore standard for availability and reliability under adverse conditions. Almost all equipment deployed in Central Offices in the United States adheres to this strict specification for Earthquake/Shock/Vibration, Temperature and Humidity, Airborne Contamination, Acoustic Noise, and Electrical Protection. The GNP WorkServer meets or exceeds the NEBS standard in all of these areas. This means that even for applications outside the Central Office, the GNP WorkServer will remain operational in the most extreme circumstances. NEBS is the standard for tested and proven reliability in telecommunications systems.

We can provide the full NEBS specification for your review; however, a few points in this operating specification are particularly salient.

Open Systems


To GNP, Open Systems means that our technology leverages standard off-the-shelf Sun SPARCengines and the standard Sun Solaris or SunOS operating system. We then repackage, design and integrate this technology into a fully NEBS certified system.

Because we use 100% Sun motherboards and software, system designers can develop and deploy new services on the same platform. There is no need for modifications to device drivers, applications, or operating systems, and all SBus modules are fully compatible. The combination of Sun SPARC technology, NEBS certification, and our own enhanced OAM&P features dramatically reduces time-to-market and increases system reliability.

Reliability and High Availability


GNP creates a complete compute system that exceeds the Telco standards for high reliability and offers additional mechanisms for improving system uptime. Many of these systems, depending upon the specific application, are required to be down no longer than three minutes per year, for 99.999% uptime. Designed to work in this environment, the WorkServer is the most robust solution available for standard Sun SPARC technology in the Central Office.

In addition to the reliability standards, there are also availability standards which GNP has implemented. Utilizing middleware that operates transparently, residing above the operating system and below the application software, WorkServer systems can automatically fail over when certain event thresholds are exceeded. For example, if a particular CPU, disk drive, or process does not respond within a specified number of milliseconds, the system reverts to a known state of wellness, using a redundant component if necessary, and continues operation.
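
As a rough illustration of this threshold-based approach (a sketch only, not GNP's middleware; the component names, timeout value, and failover action are assumptions), a monitor of this kind might be structured as follows:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Illustrative sketch of a threshold-based failover monitor; the
     * component names, timeout value, and failover action are assumptions,
     * not GNP's actual middleware. */

    #define HEARTBEAT_TIMEOUT_MS 500     /* assumed response threshold */

    struct component {
        const char *name;          /* e.g. "cpu-a", "disk-0"               */
        long        last_seen_ms;  /* time of last successful health probe */
        int         failed;
    };

    static long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
    }

    /* Called whenever a component answers a health probe. */
    static void heartbeat(struct component *c)
    {
        c->last_seen_ms = now_ms();
        c->failed = 0;
    }

    /* Called periodically; reverts to the standby when the threshold is exceeded. */
    static void check(struct component *active, struct component *standby)
    {
        if (!active->failed &&
            now_ms() - active->last_seen_ms > HEARTBEAT_TIMEOUT_MS) {
            active->failed = 1;
            fprintf(stderr, "alarm: %s unresponsive, failing over to %s\n",
                    active->name, standby->name);
            /* ...redirect work to the standby component here... */
        }
    }

    int main(void)
    {
        struct component cpu_a = { "cpu-a", 0, 0 }, cpu_b = { "cpu-b", 0, 0 };
        heartbeat(&cpu_a);        /* cpu-a reports healthy               */
        sleep(1);                 /* simulate a missed heartbeat          */
        check(&cpu_a, &cpu_b);    /* threshold exceeded: fail over        */
        return 0;
    }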

Maintainability  


In addition to operating systems and networking expertise, GNP's core competencies include world class expertise in Operations, Administration, Maintenance and Provisioning (OAM&P). This means we are dedicated to designing for the entire product lifecycle.

In several studies of maintaining systems that must be continuously available, the leading cause of failure was software faults, followed closely by incorrect operations procedures and poor documentation. In fact, with the exception of disk drives, hardware failure was not a significant factor in system failure, ranking below all other causes. And these drive failures were readily predictable and easily preventable through proper maintenance procedures.

As a result, GNP has designed its hardware systems, including media arrays, so that they can be maintained and upgraded over a ten year product lifecycle with simple and fail-safe field procedures.

The enhanced OAM&P functionality of the GNP WorkServer significantly reduces the risk of craft error. Advanced cable management, two-step removal procedures, and simple, modular design all make it easier for field personnel to service the WorkServer.


System Details


The Shelf Unit and Midplane


Every WorkServer system has at least one Shelf Unit and one Fan Unit. The Shelf Unit houses the Midplane and all WorkServer Modules such as CPUs, Media, SBus Expanders, and RAID Controllers. The Shelf Unit is the key to the enhanced OAM&P functionality such as advanced cable management, remote maintenance, and alarming.

Every Shelf Unit contains a Midplane, which is the distribution mechanism for power, data I/O, and the Intelligent Maintenance Network. All WorkServer Modules, e.g. the CPU Module, Media Array Module, and I/O Module, mount vertically inside the Shelf Unit and plug into the Midplane. All Modules are hot-swappable with visible and audible local and remote alarming capability.

The back of the Midplane is a patch panel used for interconnecting data signals from one Module to another. The patch panel enables reconfiguration without requiring a new midplane design for each application. Furthermore, isolated components, such as an individual media drive on a Media Array Module, or a CPU Module, can be replaced or interchanged without recabling. This advanced cable management significantly reduces the risk of craft error when performing maintenance procedures.

The Midplane consists of two symmetrical and electrically isolated half-midplanes. The half-midplanes are independently field-replaceable and hot-swappable. These half-midplanes have no active components, which nearly eliminates the possibility of electronic failure. Even in this unlikely case, the hot-swappable design enables replacement of one half-midplane while the other remains operational.

The standard Shelf Unit for a 19" rack holds a fourteen (14) slot Midplane; a 27" Telco rack holds a twenty (20) slot Midplane. Each Module occupies a given number of slots, as indicated in the following sections.

WorkServer Modules


The GNP WorkServer is based upon a modular system which can be configured for a variety of applications within the same intelligent Shelf Unit. This allows great flexibility to configure the best solution for any specific application. All Modules are hot-swappable and plug replaceable.

Components such as CPU Modules, Media Array Modules, and I/O Modules all leverage the same interface, power, and alarming technology, allowing them to integrate seamlessly with the highest level of reliability.

The Modules route power and maintenance signals to the Midplane. Also, the data for all Modules is routed through the Midplane to the patch panel for distribution to other Modules. This virtually eliminates external cables, which greatly facilitates regular maintenance and increases system reliability.

Any Module can be used in any slot in the Shelf Unit. This flexibility allows for a variety of configurations, supporting both CPU-intensive applications and systems that require large amounts of disk storage or I/O. For example, slots one through four can be allotted to a CPU Module, and slots five through eight could hold Media Array Modules.

Every Module has a unique physical key, assigned to its part type in each particular configuration. This means that only the appropriate Module can be plugged into a particular slot once the system is configured. This ensures that craftspeople only plug the correct Module into the correct slot.

All Modules have their own power supplies and fuses. This keeps power supply problems localized and makes faulty supplies easy to replace, and it removes the possibility that a craftsperson will replace the wrong fuse in a centrally located fuse box.

CPU Module with Integrated Power Supply

GNP has designed a vertically mounted CPU Module that holds any Sun SPARC 5, 20, or UltraSPARC engine. All MBus modules and SBus slots are available and fully supported. This might include, for example, a fully configured system with two MBus modules, three SBus cards and an SBus expander that can provide an additional six SBus slots. The CPU Module is hot-swappable and plug replaceable.

The CPU Module also has allotted space for two 3.5" half-height SCSI devices. An example configuration could include a 1.0 gigabyte boot disk and a floppy drive or a selected RAID controller.

Most importantly, almost all data, such as ethernet, SCSI, and asynchronous RS-232 and RS-422, coming into or going out of the Sun SPARCengine, is routed through the Midplane to the patch panel (on the back of the midplane) for distribution to any selected Module.

In most cases, this eliminates almost all external cabling. Depending upon the application, however, there are certain configurations that might require external cabling such as using a Sun High Speed Serial Interface (HSI).

When removed from the Shelf Unit, the CPU Module also allows direct access to all SPARC components. This allows simple repairs and installation of memory, SBus and MBus modules, as well as the boot disk or other media devices on the carrier. This greatly facilitates field repair procedures.

The CPU Module also houses the power supply, which converts dual-feed -48 volt direct current into +5 VDC and ±12 VDC for the CPU module and media units. Its dual-feed, load-sharing design provides a robust power source for the entire CPU Module, including boot drive, floppy, and the Intelligent Maintenance Node.

The power supply's integrated maintenance node provides direct feedback to remote users about current conditions with alarms for under/over voltage, vibration, and temperature extremes. Alarm signals can trigger local and remote actions including lighting LEDs, emitting audio alerts, power cycling local units, and engaging software response for fail-over control and system shutdown.

The CPU Module occupies four slots on the Shelf Unit.

Media Array Module


The Media Array Module houses two or three removable 3.5" half- or full-height or 5.25" half-height SCSI devices on a single vertically mounted card. Each SCSI drive is mounted on its own removable cartridge with a dedicated power supply and alarming circuitry.

These hot-swappable media units can be replaced or upgraded without disabling the other units in the Module or the CPU Module controlling the device. Any standard 3.5" or 5.25" SCSI device can be mounted for use in the Module, including disk drives, 4mm DAT drives, floppy drives, CD-ROM drives, and others.

Each media carrier that holds a SCSI device utilizes an intelligent, fail-safe release mechanism to prevent accidental removal. Under normal operation, the carrier is removed only after the CPU has properly dismounted the drive and the craftsperson has followed a specific two-step request for removal. This safety mechanism greatly reduces the chance of incidental downtime due to craft error. Under emergency conditions there is also a mechanism and procedure for immediately pulling the carrier out of the Module.
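
As an illustration, the two-step interlock can be modeled as a small state machine; the states, events, and transitions below are assumptions, not GNP firmware:

    #include <stdio.h>

    /* Illustrative model of the two-step, fail-safe carrier release described
     * above; the states, events, and transitions are assumptions. */

    enum carrier_state {
        IN_SERVICE,          /* drive mounted, latch locked              */
        REMOVAL_REQUESTED,   /* craftsperson pressed the request switch  */
        UNMOUNTED,           /* CPU has dismounted the drive             */
        RELEASED             /* latch opened, carrier may be pulled      */
    };

    enum carrier_event { REQUEST_REMOVAL, OS_UNMOUNT_DONE, CONFIRM_REMOVAL };

    static enum carrier_state step(enum carrier_state s, enum carrier_event e)
    {
        switch (s) {
        case IN_SERVICE:        return e == REQUEST_REMOVAL ? REMOVAL_REQUESTED : s;
        case REMOVAL_REQUESTED: return e == OS_UNMOUNT_DONE ? UNMOUNTED : s;
        case UNMOUNTED:         return e == CONFIRM_REMOVAL ? RELEASED : s;
        default:                return s;  /* RELEASED: no further transitions */
        }
    }

    int main(void)
    {
        enum carrier_state s = IN_SERVICE;
        s = step(s, REQUEST_REMOVAL);   /* step 1: craft requests removal        */
        s = step(s, OS_UNMOUNT_DONE);   /* CPU dismounts the drive               */
        s = step(s, CONFIRM_REMOVAL);   /* step 2: craft confirms, latch opens   */
        printf("carrier %s\n", s == RELEASED ? "released" : "still locked");
        return 0;
    }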

A Media Array Module occupies two slots in the Shelf Unit.

I/O Modules

Asynchronous Serial

GNP's commercial multiport serial product, SerialSmart, has been integrated into the WorkServer. A variety of configurations provide from 16 to 64 asynchronous ports per SBus card. All ports have full hardware and software flow control. Individual ports can be run at up to 232.2 Kbaud. Y-connected configurations are also supported for redundant I/O paths.

Ethernet

24-port Ethernet hubs can be added on single cards and plugged into the Midplane. Eight ports are dedicated to the front panel, eight are dedicated to the Midplane interface, and the remaining eight can be configured for either front panel or Midplane access. All monitoring and remote power cycling is supported.

I/O Options

Additional options include FDDI, CDDI, Fast Ethernet, T1/E1 crossconnects, and HSI, as well as connectivity to legacy systems. GNP's engineering staff performs a design review and recommends configurations based upon client needs.

I/O Modules occupy two to four slots in the Shelf Unit.

RAID Controller


GNP OEMs several manufacturers' RAID controller technology depending upon the application. The CMD-5300 SCSI RAID Array has the following features:

SCSI Switch


The SCSI switch is used to interface chains of SCSI devices to the control computers. It allows one "active" computer to attach itself to a chain of SCSI devices while properly terminating the SCSI interface on the "standby" computer. It supports Fast, Fast-Wide, SCSI-2, and differential SCSI.

Cooling


For a 19" Shelf Unit, the fan system utilizes one or two two-fan Fan Units. For a 24" or 27" Shelf Unit, it uses one or two three-fan Fan Units. Four/Six fan configurations are used only when the fan is placed mid-rack, where two/three fans blow upward, and the other two/three blow downward. Each fan in the Fan Unit runs on isolated power supplies and provides 241 cubic feet per minute (CFM) airflow with zero backpressure, or an estimated 181 CFM with 0.25" H2O backpressure (typical). As a result, even in a worst case scenario with 6 SPARC 20 CPU Modules dissipating 1.8 kWatts, cooled by 2 fans, the change in temperature for air passing through the system is a mere 7.0° C, or in the case of the failure of one fan unit, 14.1°C. In a typical high availability configuration, with 2 CPUs and disk drives, only 2.1°C or 4.1°C in case of failure.

These Fan Units are quiet, with a rating of only 55 dBA per fan. In case of failure, a baffle closes the fan opening to keep air from leaking out. This ensures that the remaining fans will continue to operate effectively as a forced-air cooling system. An alarm will also be sent via the Intelligent Maintenance Network.

Intelligent Maintenance Network


The Intelligent Maintenance Network provides all of the alarm functionality required for Central Office applications, as well as many enhancements. This includes alarm monitors for under- and over-voltage, over-current, temperature, vibration, and component failure anywhere in the system, including the Fan Units and fuses. Power cycling, device configuration, and direct access to OpenBoot through ttyA are all managed through this network.

Integrated into the WorkServer Shelf Unit is an internal local operating network that connects all Modules. This Maintenance Network is completely isolated from all other circuitry, including data and power. Even in the event of a complete power failure, the Intelligent Maintenance Network can automatically initiate a shutdown and trigger an alarm at local or remote sites. Using integrated Maintenance Nodes, all CPUs, media drives, and peripheral devices can be monitored by Maintenance Computers in real time. This enables remote and onsite staff to immediately identify problems for quick resolution and repair.

A straightforward command line interface composed of a few commands and parameters allows the user direct access to the maintenance network. For example,

CFG-PWR::005::ILIM=12.3A

will configure power supply number five to limit current at 12.3 amps. By first quarter 1996, GNP will offer an API for customizing direct software control for user applications.
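
Pending that API, a simple helper can assemble commands in this form. The sketch below is illustrative only; the three-digit node field and the helper itself are assumptions, not a documented interface:

    #include <stdio.h>

    /* Illustrative helper that assembles a maintenance command in the
     * VERB::NODE::PARAMETER=VALUE form shown above.  The three-digit node
     * field and this helper are assumptions, not the documented API. */
    static int build_cmd(char *buf, size_t len,
                         const char *verb, int node, const char *setting)
    {
        int n = snprintf(buf, len, "%s::%03d::%s", verb, node, setting);
        return (n > 0 && (size_t)n < len) ? 0 : -1;   /* -1 on truncation */
    }

    int main(void)
    {
        char cmd[64];
        if (build_cmd(cmd, sizeof cmd, "CFG-PWR", 5, "ILIM=12.3A") == 0)
            puts(cmd);   /* prints: CFG-PWR::005::ILIM=12.3A */
        return 0;
    }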

Maintenance Computer and Interface


Maintenance Computers can connect to the network at any point and receive information from any Maintenance Node on the network. Such a computer can be a Sun SPARCstation running Solaris 2.x and GNP Maintenance Software, with direct control over all devices in the network, or a remote user can access the network via a modem with the full capabilities of a local console.

Remote and local computers can be used simultaneously or independently and can be configured for various levels of access control. For instance, a local machine may be configured to automatically shut down or reconfigure due to environmental conditions, while a dial-in user can be restricted to monitoring functions only until authenticated through a dial back mechanism. A freely configurable maintenance network allows the system to be designed for the best implementation for each particular application.

Maintenance Nodes


On each WorkServer Module is a Maintenance Node. This node is a state machine that monitors local alarms, front panel switches, and environmental sensors. When an error or fault is detected, within milliseconds it transmits messages that are received by controller devices. If a Maintenance Computer does not respond, it will continue to transmit messages at application-specific intervals. The alarm status is also reflected by the LEDs on the front panel, which remain in an alarmed state until the condition that generated the alarm is resolved. This assists craftspeople in identifying, locating, and fixing the problem.
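
As an illustration of this retransmit-until-acknowledged behavior (a sketch only; the interval, helper functions, and message text are assumptions, not GNP firmware):

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative retransmit loop for a Maintenance Node alarm; the interval,
     * helper functions, and message format are assumptions. */

    #define RETRY_INTERVAL_S 1   /* application-specific; 1 s here for illustration */

    static int replies;

    static void send_alarm(const char *msg) { printf("ALARM: %s\n", msg); }
    static int  acked(void) { return ++replies >= 3; }  /* stand-in for a real ack */

    static void raise_alarm(const char *msg)
    {
        /* The front-panel LED would be lit here and stay lit until the
         * condition that generated the alarm is cleared. */
        do {
            send_alarm(msg);             /* broadcast on the maintenance net    */
            sleep(RETRY_INTERVAL_S);     /* wait, then retransmit if unanswered */
        } while (!acked());
    }

    int main(void)
    {
        raise_alarm("FAN-2 FAILED");
        return 0;
    }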

Each GNP Module contains an integrated Maintenance Node with full alarm and network capabilities. In addition, we can connect Maintenance Nodes to third party devices such as routers and modem banks which have an RS-232 monitor port.

Using the Maintenance Node, each Module can be power cycled or switched on and off, either onsite or by remote access. Further, each Module has a unique electronic serial number to identify itself. This can be used to verify proper component repair and maintenance procedures, or simply to create an automated inventory system for simplified stocking and tracking of available spares and replacement parts. In addition, there are physical location IDs so that software can easily detect the exact location of a Module when reporting alarms. All maintenance firmware is stored in flash memory that can be rewritten locally or remotely for future expansion or customization of maintenance capabilities.

[Figures: Maintenance Nodes on the CPU Module, Media Array Module, Ethernet Hub, and Fan Unit]

Maintenance Procedures


GNP includes detailed maintenance procedures for the WorkServer platform. Full documentation and proper training are integral to ensuring that field personnel can effectively and efficiently maintain our systems. Since most maintenance procedures vary from application to application, GNP provides custom specifications for Operations, Administration, Maintenance, and Provisioning (OAM&P). Working in conjunction with the customer, GNP's engineering services group architects and documents procedures for craftspeople and support staff to optimize operations.

GNP has also built several features into the WorkServer product to directly simplify maintenance procedures. Many of the details are addressed elsewhere in this document, including fail-safe carrier removal procedures, remote alarming and control, accessible CPU carrier design, physical keys for all carrier modules, and reduced cable management due to the patch panel and carrier design. By designing for the entire lifecycle of the product, we have significantly reduced the ongoing expense and trouble of maintaining our systems over time.


Component Configuration Overview


Shelf Unit

Form Factor       Slots Available   Notes
19" Rack/Frame    14                All Shelf Units are 18 su (17.64") high
24" Rack/Frame    18
27" Rack/Frame    20
Deskside          7
Custom            1-20

Component                          Slots Required   Notes
CPU Module                         4 or 5           SPARC 5, 20, or UltraSPARC engine, with any MBus module, all SBus slots available, boot disk, memory, and media; includes -48 VDC dual-feed power supply with alarming
Media Array Module                 2                Three removable half-height devices on a single carrier, each with individual power supplies
RAID Disk Controller               *                Mountable as any 3.5" device: disk carrier, CPU Module, etc. (power supply included when mounted in a disk carrier)
Asynchronous SerialSmart Module    2 or 4           16- or 32-port serial communications controller

* 3.5" form factor

WorkServer White Paper, Revision D, 2/8/96

